<?xml version="1.0" encoding="ISO-8859-1"?>
<metadatalist>
	<metadata ReferenceType="Conference Proceedings">
		<site>sibgrapi.sid.inpe.br 802</site>
		<holdercode>{ibi 8JMKD3MGPEW34M/46T9EHH}</holdercode>
		<identifier>8JMKD3MGPBW34M/38737C8</identifier>
		<repository>sid.inpe.br/sibgrapi/2010/09.02.12.19</repository>
		<lastupdate>2010:09.02.12.19.13 sid.inpe.br/banon/2001/03.30.15.38 administrator</lastupdate>
		<metadatarepository>sid.inpe.br/sibgrapi/2010/09.02.12.19.13</metadatarepository>
		<metadatalastupdate>2022:06.14.00.06.56 sid.inpe.br/banon/2001/03.30.15.38 administrator {D 2010}</metadatalastupdate>
		<doi>10.1109/SIBGRAPI.2010.55</doi>
		<citationkey>AlvesHash:2010:TeReEx</citationkey>
		<title>Text Regions Extracted from Scene Images by Ultimate Attribute Opening and Decision Tree Classification</title>
		<format>Printed, On-line.</format>
		<year>2010</year>
		<numberoffiles>1</numberoffiles>
		<size>1056 KiB</size>
		<author>Alves, Wonder Alexandre Luz,</author>
		<author>Hashimoto, Ronaldo Fumio,</author>
		<affiliation>Institute of Mathematics and Statistics - University of São Paulo</affiliation>
		<affiliation>Institute of Mathematics and Statistics - University of São Paulo</affiliation>
		<editor>Bellon, Olga,</editor>
		<editor>Esperança, Claudio,</editor>
		<e-mailaddress>wonder@ime.usp.br</e-mailaddress>
		<conferencename>Conference on Graphics, Patterns and Images, 23 (SIBGRAPI)</conferencename>
		<conferencelocation>Gramado, RS, Brazil</conferencelocation>
		<date>30 Aug.-3 Sep. 2010</date>
		<publisher>IEEE Computer Society</publisher>
		<publisheraddress>Los Alamitos</publisheraddress>
		<booktitle>Proceedings</booktitle>
		<tertiarytype>Full Paper</tertiarytype>
		<transferableflag>1</transferableflag>
		<versiontype>finaldraft</versiontype>
		<keywords>scene-text localization, connected component approach, text information extraction, residual operator.</keywords>
		<abstract>In this work we propose a method for localizing text regions within scene images that consists of two major stages. In the first stage, a set of potential text regions is extracted from the input image using residual operators (such as ultimate attribute opening and closing). In the second stage, a set of features is obtained from each potential text region, and this feature set is then used as input to a decision tree classifier in order to label these regions as text or non-text. Experiments performed using images from the ICDAR public dataset show that this method is a good alternative for problems involving text localization in scene images.</abstract>
		<language>en</language>
		<targetfile>paper_final.pdf</targetfile>
		<usergroup>wonder@ime.usp.br</usergroup>
		<visibility>shown</visibility>
		<nexthigherunit>8JMKD3MGPEW34M/46SJT6B</nexthigherunit>
		<nexthigherunit>8JMKD3MGPEW34M/4742MCS</nexthigherunit>
		<citingitemlist>sid.inpe.br/sibgrapi/2022/05.14.20.21 5</citingitemlist>
		<hostcollection>sid.inpe.br/banon/2001/03.30.15.38</hostcollection>
		<lasthostcollection>sid.inpe.br/banon/2001/03.30.15.38</lasthostcollection>
		<url>http://sibgrapi.sid.inpe.br/rep-/sid.inpe.br/sibgrapi/2010/09.02.12.19</url>
	</metadata>
</metadatalist>